
    Laser speckle photography for surface tampering detection

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 59-61).
    It is often desirable to detect whether a surface has been touched, even when the changes made to that surface are too subtle to see in a pair of before-and-after images. To address this challenge, we introduce a new imaging technique that combines computational photography and laser speckle imaging. Without requiring controlled laboratory conditions, our method is able to detect surface changes that would be indistinguishable in regular photographs. It is also mobile and does not need to be present at the time of contact with the surface, making it well suited for applications where the surface of interest cannot be constantly monitored. Our approach takes advantage of the fact that tiny surface deformations cause phase changes in reflected coherent light, which alter the speckle pattern visible under laser illumination. We take before and after images of the surface under laser light and detect subtle contact by correlating the speckle patterns in these images. A key challenge we address is that speckle imaging is very sensitive to the location of the camera, so removing and reintroducing the camera requires high-accuracy viewpoint alignment. To this end, we use a combination of computational rephotography and correlation analysis of the speckle pattern as a function of camera translation. Our technique provides a reliable way of detecting subtle surface contact at a level that was previously only possible under laboratory conditions. With our system, the detection of these subtle surface changes can now be brought into the wild.
    by YiChang Shih. S.M.
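
    As a rough illustration of the core detection step, the sketch below correlates before/after speckle images over local windows; low normalized cross-correlation flags likely contact. The window size and threshold are illustrative, and the two images are assumed to be already viewpoint-aligned (the rephotography step the thesis addresses).

```python
import numpy as np

def speckle_change_map(before: np.ndarray, after: np.ndarray, win: int = 32) -> np.ndarray:
    """Return per-window normalized cross-correlation of two aligned
    grayscale speckle images; low values indicate likely surface contact."""
    h, w = before.shape
    rows, cols = h // win, w // win
    ncc = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            a = before[i*win:(i+1)*win, j*win:(j+1)*win].astype(float)
            b = after[i*win:(i+1)*win, j*win:(j+1)*win].astype(float)
            a -= a.mean()
            b -= b.mean()
            denom = np.sqrt((a*a).sum() * (b*b).sum()) + 1e-12
            ncc[i, j] = (a*b).sum() / denom
    return ncc

# Windows with ncc below a threshold (e.g., 0.5) are flagged as touched.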

    Automatic Composition Recommendations for Portrait Photography

    A user with no training in photography who takes pictures using a smartphone or other camera is often unable to capture attractive portrait photographs. This disclosure describes techniques to automatically determine optimal camera view-angles and frame elements, and to generate instructions that guide users to capture better-composed photographs. An ultra-wide (UW) image is obtained via a stream parallel to the wide (W) image stream that the user previews during capture. The UW image is used as a guide to determine an optimal field of view (FoV) for the W image, e.g., to determine an optimal foreground and background composition, to add elements that enhance artistic value, or to omit elements that detract from it. Standard techniques of good photography, e.g., the rule of thirds, optimal head orientation, etc., can be used to guide the user toward an optimal FoV that results in an attractive photograph.
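
    As a hedged sketch of one compositional cue mentioned above, the code below scores candidate crops by the proximity of a detected subject to the rule-of-thirds intersections. The disclosure's actual guidance logic is not public, so the scoring function and candidate crops here are purely illustrative.

```python
from itertools import product

def thirds_score(crop, subject):
    """Score a crop (x, y, w, h) by how close the subject point (sx, sy)
    lies to the nearest rule-of-thirds intersection; higher is better."""
    x, y, w, h = crop
    sx, sy = subject
    # The four "power points" where the third lines intersect.
    points = [(x + w*fx, y + h*fy) for fx, fy in product((1/3, 2/3), repeat=2)]
    d = min(((sx - px)**2 + (sy - py)**2) ** 0.5 for px, py in points)
    diag = (w*w + h*h) ** 0.5
    return 1.0 - d / diag  # normalize by the crop diagonal

# Pick the candidate W-frame crop within the UW image with the best score,
# then instruct the user to pan toward it.
best = max([(0, 0, 1200, 800), (200, 100, 1200, 800)],
           key=lambda c: thirds_score(c, subject=(640, 360)))
```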

    Data-driven hallucination of different times of day from a single outdoor photo

    We introduce "time hallucination": synthesizing a plausible image at a different time of day from an input image. This challenging task often requires dramatically altering the color appearance of the picture. In this paper, we introduce the first data-driven approach to automatically creating a plausible-looking photo that appears as though it were taken at a different time of day. The time of day is specified by a semantic time label, such as "night". Our approach relies on a database of time-lapse videos of various scenes. These videos provide rich information about the variations in color appearance of a scene throughout the day. Our method transfers the color appearance from videos with a similar scene as the input photo. We propose a locally affine model learned from the video for the transfer, allowing our model to synthesize new color data while retaining image details. We show that this model can hallucinate a wide range of different times of day. The model generates a large sparse linear system, which can be solved by off-the-shelf solvers. We validate our methods by synthesizing transforming photos of various outdoor scenes to four times of interest: daytime, the golden hour, the blue hour, and nighttime.National Science Foundation (U.S.) (NSF No.0964004)National Science Foundation (U.S.) (NSF CGV-1111415

    Techniques for Wide-Angle Distortion Correction Using an Ellipsoidal Projection

    This publication describes techniques that use an ellipsoidal projection to correct image distortion caused by wide-angle cameras. Two-dimensional (2D) Cartesian pixel coordinates of an image taken with a wide-angle camera are first back-projected to three-dimensional (3D) spherical pixel coordinates (e.g., onto the surface of a 3D sphere) using the focal length of the camera. The 3D spherical pixel coordinates are then projected to 3D ellipsoidal pixel coordinates (e.g., onto the surface of a 3D ellipsoid). The major and minor axes of the ellipsoid are defined according to a face orientation determined within the image. The 3D ellipsoidal pixel coordinates are then projected back to 2D Cartesian pixel coordinates to arrive at a distortion-corrected 2D image.
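
    The projection chain can be sketched in a few lines. The pinhole model, the centering at the principal point, and the fixed axis ratios standing in for the face-orientation-dependent ellipsoid axes are all simplifying assumptions here.

```python
import numpy as np

def ellipsoidal_correct(u, v, f, a=1.0, b=0.9):
    """Map a pixel (u, v), given relative to the image center, with focal
    length f to its corrected position via sphere -> ellipsoid -> plane."""
    # Back-project the pixel to a direction on the unit sphere.
    d = np.array([u, v, f], dtype=float)
    x, y, z = d / np.linalg.norm(d)
    # Reproject onto an axis-aligned ellipsoid by scaling the sphere point;
    # a and b would be chosen from the detected face orientation.
    xe, ye, ze = a * x, b * y, z
    # Perspective-project back onto the 2D image plane.
    return f * xe / ze, f * ye / ze
```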

    Improved Object Detection in an Image by Correcting Regions with Distortion

    This publication describes techniques and processes for correcting distortion in an image in order to improve object detection by an object detector on an imaging device. To avoid missing target objects (e.g., faces) during object detection due to distortion, the object detector first performs detection on the image using a low threshold value, which is associated with a lower chance of missing target objects. The detection results are compared against regions of the image that are known to have distortion due to factors such as a wide field of view (WFOV) camera lens. The overlapping areas of the detection results and the distorted regions are identified as candidate regions that would benefit from distortion correction. Using an algorithm, the candidate regions are corrected (e.g., undistorted, cropped, down-sampled, rotated, and/or frontalized) to reduce distortion. Object detection is performed again over the corrected candidate regions, resulting in improved confidence. The object detection results can then be used by the imaging device to provide a high-quality image and a positive user experience.
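
    A rough sketch of the two-pass flow is below; `detect` and `undistort_region` are hypothetical stand-ins for the device's detector and correction algorithm, and all thresholds are illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def two_pass_detection(image, distorted_regions, detect, undistort_region):
    # Pass 1: low threshold so distorted faces are not missed outright.
    candidates = detect(image, threshold=0.2)
    results = []
    for det in candidates:
        overlaps = [r for r in distorted_regions if iou(det["box"], r) > 0.3]
        if overlaps:
            # Correct the candidate region, then re-detect at normal threshold.
            patch = undistort_region(image, det["box"])
            results.extend(detect(patch, threshold=0.6))
        elif det["score"] >= 0.6:
            results.append(det)
    return results
```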

    Techniques for Deblurring Faces in Images by Utilizing Multi-Camera Fusion

    This publication describes techniques for deblurring faces in images by utilizing multi-camera (e.g., dual-camera) fusion. In these techniques, multiple cameras of a computing device (e.g., a wide-angle camera and an ultrawide-angle camera) concurrently capture a scene. A multi-camera fusion technique fuses the captured images together to generate an image with increased sharpness while preserving the brightness of the scene and other details in the presence of motion. The images are processed by a Deblur Module, which includes an optical-flow machine-learned model for generating a warped ultrawide-angle image, a subject mask produced by a model trained to identify and mask faces detected in the wide-angle image, and an occlusion mask for handling occlusion artifacts. The warped ultrawide-angle image, the raw wide-angle image (with blurred faces), the sharp ultrawide-angle image, the subject mask, and the occlusion mask are then stacked and merged (fused) using a machine-learning model to output a sharp image free of motion blur. This publication further describes techniques that utilize adaptive multi-streaming to optimize power consumption and dual-camera usage on computing devices.
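
    The stacking-and-merging step might be organized as in the sketch below; `flow_model`, `warp`, and `fusion_model` are hypothetical stand-ins for the machine-learned components, since the publication does not specify their interfaces.

```python
import numpy as np

def fuse_for_sharp_faces(wide, ultrawide, flow_model, warp, fusion_model,
                         subject_mask, occlusion_mask):
    # Warp the sharp ultrawide frame into the wide camera's viewpoint.
    flow = flow_model(ultrawide, wide)
    warped_uw = warp(ultrawide, flow)
    # Stack all cues channel-wise: blurred wide, sharp UW, warped UW, masks.
    stack = np.concatenate([wide, ultrawide, warped_uw,
                            subject_mask[..., None],
                            occlusion_mask[..., None]], axis=-1)
    # The learned merge outputs the wide-FoV image with deblurred faces.
    return fusion_model(stack)
```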

    Style transfer for headshot portraits

    Headshot portraits are a popular subject in photography, but achieving a compelling visual style requires advanced skills that a casual photographer does not have. Further, algorithms that automate or assist the stylization of generic photographs do not perform well on headshots due to the feature-specific, local retouching that a professional photographer typically applies to such portraits. We introduce a technique to transfer the style of an example headshot photo onto a new one, allowing one to easily reproduce the look of renowned artists. At the core of our approach is a new multiscale technique to robustly transfer the local statistics of an example portrait onto a new one. This technique matches properties such as the local contrast and the overall lighting direction while being tolerant to the unavoidable differences between the faces of two different people. Additionally, because artists sometimes produce entire headshot collections in a common style, we show how to automatically find a good example to use as a reference for a given portrait, enabling style transfer without the user having to search for a suitable example for each input. We demonstrate our approach on data taken in a controlled environment as well as on a large set of photos downloaded from the Internet, and show that we can successfully handle styles by a variety of different artists. Quanta Computer (Firm); Adobe Systems.
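
    The multiscale local-statistics transfer can be sketched as matching local band-pass energy between Laplacian pyramids. This simplified version assumes a grayscale face crop already densely aligned to the example; the pyramid depth, smoothing scale, and gain clamp are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(cur, 2.0)
        pyr.append(cur - low)       # band-pass layer
        cur = low[::2, ::2]         # downsample for the next level
    pyr.append(cur)                 # low-frequency residual
    return pyr

def transfer_style(inp, example, levels=4, eps=1e-4):
    pin = laplacian_pyramid(inp, levels)
    pex = laplacian_pyramid(example, levels)
    out = []
    for li, le in zip(pin[:-1], pex[:-1]):
        # Local energy = smoothed squared band-pass coefficients; the gain
        # map pushes the input's local statistics toward the example's.
        ei = gaussian_filter(li**2, 2.0)
        ee = gaussian_filter(le**2, 2.0)
        gain = np.clip(np.sqrt(ee / (ei + eps)), 0.1, 3.0)
        out.append(li * gain)
    out.append(pex[-1])  # adopt the example's low-frequency lighting
    # Collapse the pyramid back into an image.
    img = out[-1]
    for band in reversed(out[:-1]):
        img = zoom(img, 2, order=1)[:band.shape[0], :band.shape[1]] + band
    return img
```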

    Automatic Removal of Lens Flare Artifacts

    A strong light source in the field of view of a camera can cause small circular artifacts, known as lens flares or ghost dots, to appear at the mirror location of the light source with respect to the image center. Lens flares occur due to internal reflections within the lenses of the camera. While lens flares can be reduced with lens coatings, it is difficult to eliminate them entirely. This disclosure describes software techniques to automatically detect and remove lens flare artifacts in images. Per the techniques, the presence and position of a strong light source in the field of view are detected in the captured image. Based on the detection, the ghost dot is identified at the mirror location, masked and filled using inpainting techniques, and the result is evaluated. The described techniques can reliably remove lens flares of a wide variety of shapes and can be implemented in any camera device, including smartphone cameras.
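
    A minimal sketch of the detect-mirror-inpaint pipeline using standard OpenCV calls follows; the brightest-pixel heuristic for locating the light source and the fixed dot radius are simplifying assumptions.

```python
import cv2
import numpy as np

def remove_ghost_dot(img_bgr, dot_radius=12):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Locate the strongest light source as the brightest (blurred) pixel.
    _, _, _, (lx, ly) = cv2.minMaxLoc(cv2.GaussianBlur(gray, (31, 31), 0))
    h, w = gray.shape
    cx, cy = w // 2, h // 2
    # The ghost dot appears mirrored about the image center.
    gx, gy = 2 * cx - lx, 2 * cy - ly
    mask = np.zeros((h, w), np.uint8)
    cv2.circle(mask, (gx, gy), dot_radius, 255, -1)
    # Inpaint the masked ghost region.
    return cv2.inpaint(img_bgr, mask, 3, cv2.INPAINT_TELEA)
```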

    Transform recipes for efficient cloud photo enhancement

    Cloud image processing is often proposed as a solution to the limited computing power and battery life of mobile devices: it allows complex algorithms to run on powerful servers with a virtually unlimited energy supply. Unfortunately, this overlooks the time and energy cost of uploading the input and downloading the output images. When transfer overhead is accounted for, processing images on a remote server becomes less attractive, and many applications do not benefit from cloud offloading. We aim to change this in the case of image enhancements that preserve the overall content of an image. Our key insight is that, in this case, the server can compute and transmit a description of the transformation from input to output, which we call a transform recipe. At equivalent quality, our recipes are much more compact than JPEG images, which reduces the client's download. Furthermore, recipes can be computed from highly compressed inputs, significantly reducing the data uploaded to the server. The client reconstructs a high-fidelity approximation of the output by applying the recipe to its local high-quality input. We demonstrate our results on 168 images and 10 image processing applications, showing that our recipes form a compact representation for a diverse set of image filters. With an equivalent transmission budget, they provide higher-quality results than JPEG-compressed input/output images, with a gain on the order of 10 dB in many cases. We demonstrate the utility of recipes on a mobile phone by profiling the energy consumption and latency of both local and cloud computation: a transform-recipe-based pipeline runs 2-4x faster and uses 2-7x less energy than local or naive cloud computation. Qatar Computing Research Institute; United States. Defense Advanced Research Projects Agency (Agreement FA8750-14-2-0009); Stanford University, Stanford Pervasive Parallelism Laboratory; Adobe Systems.
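
    The recipe idea can be illustrated with per-block affine color models: the server fits compact models mapping its (compressed) input to the enhanced output, and the client applies them to its local high-quality image. The paper's actual recipe is multiscale and more elaborate, so the block size and single-model-per-block structure here are illustrative.

```python
import numpy as np

def fit_recipe(server_in, server_out, block=64):
    """Server side: fit one 4x3 affine color model per block; the set of
    models is far smaller to transmit than a JPEG of the output."""
    h, w, _ = server_in.shape
    recipe = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            src = server_in[i:i+block, j:j+block].reshape(-1, 3).astype(float)
            dst = server_out[i:i+block, j:j+block].reshape(-1, 3).astype(float)
            X = np.hstack([src, np.ones((len(src), 1))])
            recipe[(i, j)], *_ = np.linalg.lstsq(X, dst, rcond=None)
    return recipe

def apply_recipe(client_in, recipe, block=64):
    """Client side: apply each block's model to the high-quality input to
    reconstruct an approximation of the enhanced output."""
    out = np.zeros_like(client_in, dtype=float)
    for (i, j), M in recipe.items():
        patch = client_in[i:i+block, j:j+block].astype(float)
        X = np.hstack([patch.reshape(-1, 3), np.ones((patch[..., 0].size, 1))])
        out[i:i+block, j:j+block] = (X @ M).reshape(patch.shape)
    return out
```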